Gradient Method with Retards and Generalizations

Authors

  • A. FRIEDLANDER
  • M. RAYDAN
Abstract

A generalization of the steepest descent and other methods for solving a large-scale symmetric positive definite system $Ax = b$ is presented. Given a positive integer $m$, the new iteration is $x_{k+1} = x_k - \lambda(x_{\nu(k)})(Ax_k - b)$, where $\lambda(x_{\nu(k)})$ is the steepest descent step at a previous iteration $\nu(k) \in \{k, k-1, \ldots, \max\{0, k-m\}\}$. Global convergence to the solution of the problem is established under a more general framework, and numerical experiments suggest that some strategies for the choice of $\nu(k)$ give rise to efficient methods for obtaining approximate solutions of the system.
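To make the iteration concrete, here is a minimal numerical sketch in Python. The Cauchy (steepest descent) step $\lambda(x) = g^T g / (g^T A g)$ with $g = Ax - b$ follows from the abstract; the random choice of $\nu(k)$, the function name, and all parameter values are illustrative assumptions, since the abstract leaves the selection strategy open.

```python
import numpy as np

def gradient_method_with_retards(A, b, x0, m=3, tol=1e-8, max_iter=10000, seed=0):
    """Sketch of x_{k+1} = x_k - lambda(x_{nu(k)}) * (A x_k - b), where
    lambda(x) = (g @ g) / (g @ A g) is the Cauchy (steepest descent) step at x,
    with g = A x - b. The random draw of nu(k) from {max(0, k-m), ..., k} is an
    illustrative assumption; the paper studies several selection strategies."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    iterates = [x.copy()]  # keep x_0, ..., x_k so lambda can be evaluated at x_{nu(k)}
    for k in range(max_iter):
        g = A @ x - b  # gradient of the quadratic q(x) = 0.5 x^T A x - b^T x
        if np.linalg.norm(g) <= tol:
            break
        nu = int(rng.integers(max(0, k - m), k + 1))  # retard nu(k)
        g_nu = A @ iterates[nu] - b                   # gradient at the retarded iterate
        lam = (g_nu @ g_nu) / (g_nu @ (A @ g_nu))     # Cauchy step length at x_{nu(k)}
        x = x - lam * g
        iterates.append(x.copy())
    return x

# Illustrative run on a random symmetric positive definite system.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)   # SPD by construction
b = rng.standard_normal(50)
x = gradient_method_with_retards(A, b, np.zeros(50), m=4)
print(np.linalg.norm(A @ x - b))  # residual norm, small at convergence
```

With $m = 0$ the retard is always $\nu(k) = k$ and the sketch reduces to classical steepest descent.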


Similar Articles

Gradient Method with Dynamical Retards for Large-scale Optimization Problems

We consider a generalization of the gradient method with retards for the solution of large-scale unconstrained optimization problems. Recently, the gradient method with retards was introduced to find global minimizers of large-scale quadratic functions. The most interesting feature of this method is that it does not enforce a decrease in the objective function, which allows fast local convergen...


Analysis and Generalizations of the Linearized Bregman Method

This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit problem whenever its smoothing parameter α is greater than a certain value. The analysis is based on showing ...
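For context, a minimal sketch of the linearized Bregman iteration for basis pursuit ($\min \|x\|_1$ s.t. $Ax = b$), viewed as gradient ascent on the dual of the regularized problem $\min \alpha\|x\|_1 + \tfrac{1}{2}\|x\|_2^2$ s.t. $Ax = b$; the exact regularization property described above says the two problems share a solution once $\alpha$ is large enough. The step size rule and names below are assumptions, not the paper's implementation.

```python
import numpy as np

def shrink(v, alpha):
    """Soft-thresholding: componentwise sign(v) * max(|v| - alpha, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - alpha, 0.0)

def linearized_bregman(A, b, alpha, step, iters=5000):
    """Sketch of the linearized Bregman iteration for basis pursuit,
    written as gradient ascent on the dual of
    min alpha*||x||_1 + 0.5*||x||_2^2  s.t.  Ax = b.
    Step size and iteration count are illustrative choices."""
    v = np.zeros(A.shape[1])  # v = A^T y for the dual variable y
    for _ in range(iters):
        x = shrink(v, alpha)                  # primal iterate recovered from v
        v = v + step * (A.T @ (b - A @ x))    # ascent step on the dual gradient
    return shrink(v, alpha)

# Illustrative run on a small sparse recovery problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
x = linearized_bregman(A, b, alpha=10.0, step=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.linalg.norm(A @ x - b))  # residual of the recovered sparse solution
```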


A Mathematical Optimization Model for Solving Minimum Ordering Problem with Constraint Analysis and some Generalizations

In this paper, a mathematical method is proposed to formulate a generalized ordering problem. This model is formed as a linear optimization model in which some variables are binary. The constraints of the problem have been analyzed with the emphasis on the assessment of their importance in the formulation. On the one hand, these constraints enforce conditions on an arbitrary subgraph and then g...


Ordered Weighted Averaging Operators and their Generalizations with Applications in Decision Making

The definition of ordered weighted averaging (OWA) operators and their applications in decision making are reviewed. Also, some generalizations of OWA operators are studied and then, the notion of 2-symmetric OWA operators is introduced. These generalizations are illustrated by some examples.
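As a concrete illustration of the basic definition (the paper's generalizations and the 2-symmetric variant are not reproduced here): an OWA operator with weight vector w applies the weights to the inputs after sorting them in descending order. The helper name below is illustrative.

```python
def owa(weights, values):
    """Ordered weighted averaging: the weights are applied to the values
    sorted in descending order; weights are nonnegative and sum to 1."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

# Classic special cases recovered by the choice of weights:
assert owa([1, 0, 0], [3, 1, 2]) == 3          # all weight on the largest -> max
assert owa([0, 0, 1], [3, 1, 2]) == 1          # all weight on the smallest -> min
assert owa([0.5, 0.5, 0.0], [3, 1, 2]) == 2.5  # mean of the two largest
```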


Functional Bundle Methods

Recently, gradient descent based optimization procedures and their functional gradient based boosting generalizations have shown strong performance across a number of convex machine learning formulations. They are particularly alluring for structured prediction problems due to their low memory requirements [5], and recent theoretical work has shown that they converge fast across a wide range of ...
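To illustrate the functional-gradient boosting idea mentioned above (a sketch under simplifying assumptions, not the bundle method of the paper): for squared loss the negative functional gradient is the residual, and each boosting round fits a weak learner, here a shallow regression tree, to it. The function names and parameter values are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def functional_gradient_boost(X, y, rounds=100, lr=0.1, depth=2):
    """Functional gradient descent for squared loss: the negative functional
    gradient at the current ensemble F is the residual y - F(X), and each
    round fits a weak learner (a shallow tree, an illustrative choice)
    to that residual."""
    F = np.zeros(len(y))  # current ensemble prediction on the training set
    learners = []
    for _ in range(rounds):
        residual = y - F  # negative gradient of the loss 0.5 * (y - F)^2
        h = DecisionTreeRegressor(max_depth=depth).fit(X, residual)
        F += lr * h.predict(X)  # step of size lr in function space
        learners.append(h)
    return learners

def boost_predict(learners, X, lr=0.1):
    return lr * sum(h.predict(X) for h in learners)

# Illustrative run on a toy 1-D regression problem.
X = np.linspace(0, 6, 200).reshape(-1, 1)
y = np.sin(X).ravel()
learners = functional_gradient_boost(X, y)
print(np.mean((boost_predict(learners, X) - y) ** 2))  # training MSE
```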




Publication year: 1999